feat(chat): per-user thread isolation #13
Conversation
- Simplify Tambo base URL resolution to use server-side `TAMBO_URL` only
- Centralize missing `TAMBO_API_KEY` error message and reuse in SSE/proxy paths
- Remove incorrect `reasoning` cast from message mapping
- Strongly type `AdvanceStreamRequestBody` and validate parsed JSON
- Preserve and forward `availableComponents`, `forceToolChoice`, and `toolCallCounts` to Tambo
- Stop overriding `created_at` when inserting/upserting messages to keep DB timestamps authoritative
- Normalize SSE line endings (`\r\n` → `\n`) before parsing event lines
- Disable auto thread name generation in `ChatClient` to avoid conflicts with server-side naming
- Extend middleware matcher to cover `/api/tambo/:path*` so Supabase auth/session middleware runs for Tambo API calls
Key endpoints in src/app/api/tambo/[...path]/route.ts rely on RLS for tenant isolation but do not scope queries by user_id, creating a high-risk footgun if policies are ever misconfigured. The SSE streaming/persistence flow can leave inconsistent state (persisting user messages even when the upstream request fails) and may duplicate messages due to unstable ID mapping. message-suggestions.tsx appears to disable generated suggestions entirely after the thread starts, likely a regression. The migration contains destructive deletes that should not run automatically in production environments.
Summary of changes
Summary
This PR updates the app to proxy all Tambo API traffic through a new authenticated Next.js route and introduces per-user thread/message persistence in Supabase.
Key changes
- Tambo proxy + persistence API: adds `src/app/api/tambo/[...path]/route.ts` implementing:
  - authenticated `/api/tambo` proxy to Tambo with server-side `TAMBO_API_KEY`
  - local handlers for thread list/retrieve/update/name generation
  - SSE `/threads/advancestream` passthrough that persists streamed messages to Supabase
- Per-user thread isolation:
  - adds migration `supabase/migrations/20260207_per_user_threads.sql` creating `threads` and `messages` tables with RLS policies
- Chat page split:
  - `src/app/chat/page.tsx` becomes a server component that enforces auth + key presence and renders a new client component `src/app/chat/chat-client.tsx`
- Client configuration updates:
  - `TamboProvider` now uses `tamboUrl="/api/tambo"` and `apiKey="unused"` in README/chat/interactables
  - `example.env.local` switches from `NEXT_PUBLIC_TAMBO_API_KEY` to server-only `TAMBO_API_KEY`
- Suggestions UX change:
  - `message-suggestions.tsx` replaces the generated suggestions hook with setting the input value directly
- Middleware matcher broadened:
  - includes `/chat/:path*` and `/api/tambo/:path*` so Supabase session middleware runs for those routes
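
For reference, a minimal sketch of the client wiring described above (the `TamboProvider` props come from this PR; the import path and surrounding JSX are illustrative assumptions):

```tsx
"use client";

// Sketch only: import path assumed; props per this PR's client configuration.
import { TamboProvider } from "@tambo-ai/react";

export default function ChatClient() {
  return (
    // All Tambo traffic goes through the authenticated Next.js proxy,
    // so the real API key never reaches the browser.
    <TamboProvider tamboUrl="/api/tambo" apiKey="unused">
      {/* chat UI renders here */}
    </TamboProvider>
  );
}
```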
```ts
async function handleThreadsList(
  supabase: Awaited<ReturnType<typeof createSupabaseServerClient>>,
  userId: string,
) {
  const { data, error } = await supabase
    .from("threads")
    .select("id, created_at, updated_at, name, metadata")
    .order("updated_at", { ascending: false });

  if (error) {
    return jsonError(error.message, 500);
  }

  const items = (data as unknown as ThreadRow[]).map((row) =>
    threadFromRow(row, userId),
  );

  return NextResponse.json({
    items,
    total: items.length,
    count: items.length,
  });
```
`handleThreadsList` lists all threads (`.from("threads")...order(...)`) without filtering by `user_id`. You’re relying on RLS to prevent cross-user reads, but that means:
- the endpoint’s behavior becomes tightly coupled to RLS being enabled and correct in every environment (including local/dev)
- failures/misconfigurations become data leaks
- queries can become more expensive than necessary because Postgres must apply policy filters
Given the stated goal of per-user thread isolation, you should also filter at the query level.
Suggestion
Update the query to explicitly scope by user_id.
```ts
const { data, error } = await supabase
  .from("threads")
  .select("id, created_at, updated_at, name, metadata")
  .eq("user_id", userId)
  .order("updated_at", { ascending: false });
```

Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this suggestion.
```ts
async function handleThreadUpdate(
  request: Request,
  supabase: Awaited<ReturnType<typeof createSupabaseServerClient>>,
  userId: string,
  threadId: string,
) {
  const body = (await request.json().catch(() => null)) as
    | { name?: string; metadata?: Record<string, unknown> }
    | null;

  if (!body) return jsonError("Invalid JSON body", 400);

  const update: Record<string, unknown> = {};
  if (typeof body.name === "string") update.name = body.name;
  if (body.metadata && typeof body.metadata === "object") {
    update.metadata = body.metadata;
  }

  if (Object.keys(update).length === 0) {
    return jsonError("No valid fields to update", 400);
  }

  const { error } = await supabase
    .from("threads")
    .update(update)
    .eq("id", threadId);

  if (error) return jsonError(error.message, 500);

  const { data: thread, error: readError } = await supabase
    .from("threads")
    .select("id, created_at, updated_at, name, metadata")
    .eq("id", threadId)
    .maybeSingle();

  if (readError) return jsonError(readError.message, 500);
  if (!thread) return jsonError("Not found", 404);

  return NextResponse.json(threadFromRow(thread as unknown as ThreadRow, userId));
}
```
`handleThreadUpdate` updates by `id` only. With RLS enabled it will likely be blocked for non-owners, but you should not depend on that as the only isolation boundary for an app-level API.
Also, as written, it updates the row even if the `threadId` doesn’t exist (no error), then re-reads and returns 404. That’s fine, but adding the `user_id` filter makes the update and read consistent and avoids revealing whether a thread exists for other users.
Suggestion
Scope the update and subsequent read to the user.
```ts
const { error } = await supabase
  .from("threads")
  .update(update)
  .eq("id", threadId)
  .eq("user_id", userId);

// ... later
const { data: thread, error: readError } = await supabase
  .from("threads")
  .select("id, created_at, updated_at, name, metadata")
  .eq("id", threadId)
  .eq("user_id", userId)
  .maybeSingle();
```

Reply with "@CharlieHelps yes please" if you'd like me to add a commit with this suggestion.
```ts
const { data: historyRows, error: historyError } = await supabase
  .from("messages")
  .select(
    [
      "role",
      "content",
      "additional_context",
      "component",
      "tool_call_request",
      "created_at",
    ].join(","),
  )
  .eq("thread_id", persistentThreadId)
  .order("created_at", { ascending: true });

if (historyError) return jsonError(historyError.message, 500);

const { error: appendError } = await supabase.from("messages").insert({
  id: crypto.randomUUID(),
  thread_id: persistentThreadId,
  role: messageToAppend.role,
  content: messageToAppend.content,
  additional_context: messageToAppend.additionalContext ?? null,
  component_state: {},
  component: messageToAppend.component ?? null,
  tool_call_request: messageToAppend.toolCallRequest ?? null,
});

if (appendError) return jsonError(appendError.message, 500);

const initialMessages = (historyRows as any[]).map((m) => ({
  role: m.role,
  content: m.content,
  additionalContext: m.additional_context ?? undefined,
  component: m.component ?? undefined,
  toolCallRequest: m.tool_call_request ?? undefined,
}));

const computeBody: Record<string, unknown> = {
  contextKey: userId,
  initialMessages,
  messageToAppend,
  clientTools: [],
};

if (body.availableComponents != null) {
  computeBody.availableComponents = body.availableComponents;
}
if (typeof body.forceToolChoice === "string") {
  computeBody.forceToolChoice = body.forceToolChoice;
}
if (body.toolCallCounts && typeof body.toolCallCounts === "object") {
  computeBody.toolCallCounts = body.toolCallCounts;
}

const tamboResponse = await tamboSseFetch("/threads/advancestream", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify(computeBody),
  signal: request.signal,
});

if (!tamboResponse.ok || !tamboResponse.body) {
  const text = await tamboResponse.text().catch(() => "");
  return jsonError(text || "Tambo request failed", tamboResponse.status);
}
```
In `handleAdvanceStream`, you load `historyRows` before inserting `messageToAppend`, then send `initialMessages` (derived from `historyRows`) along with `messageToAppend` to Tambo. This seems intentional (history excludes the new message), but you also persist `messageToAppend` to DB first.
If the downstream Tambo call fails (network, auth, 5xx), you’ve already written the user message, but you return an error and never persist assistant/tool responses. This creates “dangling” user messages and can confuse clients on retry (duplicate user message, different assistant response, etc.).
Suggestion
Consider making persistence atomic-ish:
- Option A (simplest): insert `messageToAppend` after confirming the Tambo request is accepted (`tamboResponse.ok`) and you have a body, as sketched below.
- Option B: insert it first but mark it with metadata like `{ pending: true }`, and clear that flag when streaming completes/persists.
- Option C: wrap DB inserts/updates in a Postgres function/transaction via RPC so the thread + message + final upserts can be applied consistently.
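
A minimal sketch of Option A, reusing the handler's own variables from the excerpt above (a reordering sketch, not a drop-in patch):

```ts
// Sketch: call Tambo first, persist the user message only once the
// upstream request has been accepted and a body is available.
const tamboResponse = await tamboSseFetch("/threads/advancestream", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify(computeBody),
  signal: request.signal,
});

if (!tamboResponse.ok || !tamboResponse.body) {
  const text = await tamboResponse.text().catch(() => "");
  // Nothing was written yet, so a failed upstream call leaves no dangling rows.
  return jsonError(text || "Tambo request failed", tamboResponse.status);
}

// Only now durably record the user message.
const { error: appendError } = await supabase.from("messages").insert({
  id: crypto.randomUUID(),
  thread_id: persistentThreadId,
  role: messageToAppend.role,
  content: messageToAppend.content,
  additional_context: messageToAppend.additionalContext ?? null,
  component_state: {},
  component: messageToAppend.component ?? null,
  tool_call_request: messageToAppend.toolCallRequest ?? null,
});
if (appendError) return jsonError(appendError.message, 500);
```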
Reply with "@CharlieHelps yes please" if you'd like me to add a commit implementing Option A (defer insert until after successful Tambo response).
```ts
buffer += decoder.decode(value, { stream: true }).replaceAll("\r\n", "\n");

while (true) {
  const nl = buffer.indexOf("\n");
  if (nl === -1) break;

  const rawLine = buffer.slice(0, nl).trim();
  buffer = buffer.slice(nl + 1);

  if (!rawLine) continue;
  if (rawLine === "data: DONE") {
    pendingDone = true;
    continue;
  }
  if (rawLine.startsWith("error: ")) {
    controller.enqueue(encoder.encode(`${rawLine}\n`));
    continue;
  }

  const jsonStr = rawLine.startsWith("data: ") ? rawLine.slice(6) : rawLine;
  if (!jsonStr) continue;

  let chunk: any;
  try {
    chunk = JSON.parse(jsonStr);
  } catch {
    continue;
  }

  const dto = chunk?.responseMessageDto;
  if (dto && typeof dto === "object") {
    const originalMessageId = typeof dto.id === "string" ? dto.id : null;
    if (originalMessageId) {
      const mapped = messageIdMap.get(originalMessageId) ?? crypto.randomUUID();
      messageIdMap.set(originalMessageId, mapped);
      dto.id = mapped;

      finalMessages.set(mapped, {
        ...dto,
        threadId: persistentThreadId,
      });
    }

    dto.threadId = persistentThreadId;
  }

  const outLine = `data: ${JSON.stringify(chunk)}\n`;
  controller.enqueue(encoder.encode(outLine));
}
```
The SSE parser uses `rawLine = buffer.slice(0, nl).trim()`. Trimming is risky for SSE:
- it can remove meaningful leading spaces in `data:` payloads (rare but valid)
- it can change the semantics of empty `data:` lines
- it can collapse lines that should be forwarded as-is

Additionally, SSE events can have multiple `data:` lines per event; this implementation treats each line independently and will silently drop/mangle multi-line events.
Suggestion
Avoid `trim()` and implement minimal SSE framing:
- read lines verbatim
- accumulate `data:` lines until a blank line, then emit one event

At minimum, change to:

```ts
const rawLine = buffer.slice(0, nl);
```

and handle `\r` separately if needed.
Reply with "@CharlieHelps yes please" if you'd like me to add a commit with a safer SSE line parser (multi-line data: support).
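
For illustration, a self-contained sketch of the event-based framing described above (function name and callback shape are hypothetical):

```ts
// Accumulates `data:` lines verbatim and emits one event per blank line,
// per the SSE spec. Assumes input has already been normalized to LF.
function createSseEventParser(onEvent: (data: string) => void) {
  let buffer = "";
  let dataLines: string[] = [];

  return (chunkText: string) => {
    buffer += chunkText;
    let nl: number;
    while ((nl = buffer.indexOf("\n")) !== -1) {
      // Read the line verbatim; strip only a trailing \r, never trim().
      let line = buffer.slice(0, nl);
      buffer = buffer.slice(nl + 1);
      if (line.endsWith("\r")) line = line.slice(0, -1);

      if (line === "") {
        // A blank line terminates the event; multi-line data joins with \n.
        if (dataLines.length > 0) onEvent(dataLines.join("\n"));
        dataLines = [];
      } else if (line.startsWith("data:")) {
        // The spec strips at most one space after the colon.
        const value = line.slice(5);
        dataLines.push(value.startsWith(" ") ? value.slice(1) : value);
      }
      // event:, id:, retry:, and comment lines are ignored in this sketch.
    }
  };
}
```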
```ts
const { error: appendError } = await supabase.from("messages").insert({
  id: crypto.randomUUID(),
  thread_id: persistentThreadId,
  role: messageToAppend.role,
  content: messageToAppend.content,
  additional_context: messageToAppend.additionalContext ?? null,
  component_state: {},
  component: messageToAppend.component ?? null,
  tool_call_request: messageToAppend.toolCallRequest ?? null,
});

if (appendError) return jsonError(appendError.message, 500);

const initialMessages = (historyRows as any[]).map((m) => ({
  role: m.role,
  content: m.content,
  additionalContext: m.additional_context ?? undefined,
  component: m.component ?? undefined,
  toolCallRequest: m.tool_call_request ?? undefined,
}));

const computeBody: Record<string, unknown> = {
  contextKey: userId,
  initialMessages,
  messageToAppend,
  clientTools: [],
};

if (body.availableComponents != null) {
  computeBody.availableComponents = body.availableComponents;
}
if (typeof body.forceToolChoice === "string") {
  computeBody.forceToolChoice = body.forceToolChoice;
}
if (body.toolCallCounts && typeof body.toolCallCounts === "object") {
  computeBody.toolCallCounts = body.toolCallCounts;
}

const tamboResponse = await tamboSseFetch("/threads/advancestream", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify(computeBody),
  signal: request.signal,
});

if (!tamboResponse.ok || !tamboResponse.body) {
  const text = await tamboResponse.text().catch(() => "");
  return jsonError(text || "Tambo request failed", tamboResponse.status);
}

const encoder = new TextEncoder();
const decoder = new TextDecoder();

const messageIdMap = new Map<string, string>();
const finalMessages = new Map<string, any>();

let didPersist = false;
const persistMessages = async () => {
  if (didPersist) return;
  didPersist = true;

  if (finalMessages.size > 0) {
    const rows = Array.from(finalMessages.values()).map((m) => ({
      id: m.id,
      thread_id: persistentThreadId,
      role: m.role,
      content: m.content,
      component_state: m.componentState ?? {},
      additional_context: m.additionalContext ?? null,
      component: m.component ?? null,
      tool_call_request: m.toolCallRequest ?? null,
      tool_calls: m.tool_calls ?? null,
      tool_call_id: m.tool_call_id ?? null,
      parent_message_id: m.parentMessageId ?? null,
      reasoning: m.reasoning ?? null,
      reasoning_duration_ms: m.reasoningDurationMS ?? null,
      error: m.error ?? null,
      is_cancelled: m.isCancelled ?? false,
      metadata: m.metadata ?? null,
    }));

    const { error } = await supabase.from("messages").upsert(rows);
    if (error) {
      throw new Error(error.message);
    }
  }
```
`persistMessages` does `upsert(rows)` with no conflict target or dedupe logic shown. If the table has a PK on `id` (it does), that’s fine, but you’re mapping Tambo `dto.id` to random UUIDs and persisting those.
However, you also insert `messageToAppend` with a random UUID earlier. If Tambo also echoes the appended user message back as a response DTO, you may end up with two persisted copies of the same logical message (one from the initial insert, one from the streamed final upsert), since the IDs differ.
Suggestion
Introduce a stable ID strategy to prevent duplicates. Common options:
- Pass your generated `messageToAppend` ID through to Tambo (if supported) and map it back.
- Detect/skip persisting streamed DTOs that represent the user message you already inserted (e.g., compare role + content hash + `created_at` proximity + `tool_call_id`).
- Store a mapping in `metadata` for the appended message linking `tambo_original_id` to your DB id.
Reply with "@CharlieHelps yes please" if you'd like me to add a commit that persists tambo_original_id in messages.metadata and uses it to dedupe/upsert deterministically.
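
A sketch of the third option, carrying the upstream id in `metadata` so later upserts resolve deterministically (`tambo_original_id` is the comment's proposal, not an existing column):

```ts
// When mapping a streamed DTO, keep the upstream Tambo id alongside it so
// re-streamed or retried messages can map back to the same persisted row.
const mapped = messageIdMap.get(originalMessageId) ?? crypto.randomUUID();
messageIdMap.set(originalMessageId, mapped);
dto.id = mapped;

finalMessages.set(mapped, {
  ...dto,
  threadId: persistentThreadId,
  metadata: { ...(dto.metadata ?? {}), tambo_original_id: originalMessageId },
});
```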
```ts
const stream = new ReadableStream<Uint8Array>({
  async pull(controller) {
    const { done, value } = await reader.read();
    if (done) {
      try {
        await persistMessages();
        if (pendingDone) {
          controller.enqueue(encoder.encode("data: DONE\n"));
        }
      } catch (error) {
        console.error("Failed to persist streamed messages", {
          error,
          userId,
          threadId: persistentThreadId,
          messageCount: finalMessages.size,
        });

        controller.enqueue(
          encoder.encode(
            "error: Failed to persist conversation state, some messages may be missing.\n",
          ),
        );
      }
      controller.close();
      return;
    }

    buffer += decoder.decode(value, { stream: true }).replaceAll("\r\n", "\n");

    while (true) {
      const nl = buffer.indexOf("\n");
      if (nl === -1) break;

      const rawLine = buffer.slice(0, nl).trim();
      buffer = buffer.slice(nl + 1);

      if (!rawLine) continue;
      if (rawLine === "data: DONE") {
        pendingDone = true;
        continue;
      }
      if (rawLine.startsWith("error: ")) {
        controller.enqueue(encoder.encode(`${rawLine}\n`));
        continue;
      }

      const jsonStr = rawLine.startsWith("data: ") ? rawLine.slice(6) : rawLine;
      if (!jsonStr) continue;

      let chunk: any;
      try {
        chunk = JSON.parse(jsonStr);
      } catch {
        continue;
      }

      const dto = chunk?.responseMessageDto;
      if (dto && typeof dto === "object") {
        const originalMessageId = typeof dto.id === "string" ? dto.id : null;
        if (originalMessageId) {
          const mapped = messageIdMap.get(originalMessageId) ?? crypto.randomUUID();
          messageIdMap.set(originalMessageId, mapped);
          dto.id = mapped;

          finalMessages.set(mapped, {
            ...dto,
            threadId: persistentThreadId,
          });
        }

        dto.threadId = persistentThreadId;
      }

      const outLine = `data: ${JSON.stringify(chunk)}\n`;
      controller.enqueue(encoder.encode(outLine));
    }
```
SSE correctness: outgoing stream missing blank line delimiter
Server-Sent Events require events to be terminated by a blank line (`\n\n`). This implementation enqueues `data: ...\n` and `data: DONE\n` but not the required extra newline. Some clients will buffer indefinitely or parse incorrectly.
Related: the error path enqueues `error: ...\n`, which is not a standard SSE field (clients typically listen for `event:` + `data:`). If the Tambo client expects this custom `error:` prefix, fine, but the missing `\n\n` is still a protocol issue.
Suggestion
Terminate SSE events with a double newline.

For normal data chunks:

```ts
const outLine = `data: ${JSON.stringify(chunk)}\n\n`;
controller.enqueue(encoder.encode(outLine));
```

For `DONE`:

```ts
controller.enqueue(encoder.encode("data: DONE\n\n"));
```

If you also want standards-friendly errors, consider `event: error\ndata: ...\n\n`.
Reply with "@CharlieHelps yes please" if you'd like me to add a commit fixing the SSE framing.
```ts
const { error: threadUpdateError } = await supabase
  .from("threads")
  .update({ updated_at: new Date().toISOString() })
  .eq("id", persistentThreadId);

if (threadUpdateError) {
  throw new Error(threadUpdateError.message);
}
```
Persisting `threads.updated_at` should not bypass DB authority
`persistMessages()` updates `threads.updated_at` by sending an explicit ISO string:

```ts
.update({ updated_at: new Date().toISOString() })
```

But the migration already adds a trigger (`threads_set_updated_at`) to set `updated_at = now()` on update. Setting the value from the app defeats that purpose and risks clock skew.
You can update a no-op field (or re-set name to itself) or update updated_at using a DB function (if you prefer), but best is to let the trigger handle the timestamp.
Suggestion
Remove app-supplied timestamps and rely on the trigger. For example, add a touch boolean/field, or just update a benign field:
```ts
const { error: threadUpdateError } = await supabase
  .from("threads")
  .update({})
  .eq("id", persistentThreadId);
```

If Supabase rejects empty updates, add a dedicated `touched_at` or `last_activity_at` column (recommended) and set it via `now()` in the DB.
Reply with "@CharlieHelps yes please" if you'd like me to add a commit that removes the client-set updated_at and implements a safer touch mechanism.
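
One possible shape for the DB-side touch, assuming a hypothetical `touch_thread` Postgres function that sets the timestamp with `now()` server-side:

```ts
// Let the database own the clock: the (hypothetical) touch_thread function
// would run `update threads set updated_at = now() where id = $1`.
const { error: touchError } = await supabase.rpc("touch_thread", {
  thread_id: persistentThreadId,
});
if (touchError) {
  throw new Error(touchError.message);
}
```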
```ts
const headers = new Headers(request.headers);
headers.set("x-api-key", apiKey);
headers.delete("host");
headers.delete("content-length");

const body = request.body ? request.clone().body : undefined;

const response = await fetch(targetUrl, {
  method: request.method,
  headers,
  body,
  redirect: "manual",
});
```
Proxy security: forwarding cookie/authorization headers to Tambo
`proxyToTambo()` clones all incoming headers and adds `x-api-key`. That likely forwards `cookie` and possibly `authorization` to the upstream Tambo API. This is an unnecessary data leak (session cookies, CSRF tokens, etc.) to a third-party service.
Given this is an authenticated proxy, you should explicitly allowlist headers instead of passing them through, or at least strip sensitive ones (`cookie`, `authorization`, `x-forwarded-*` as needed).
Suggestion
Switch to an allowlist strategy. Example:

```ts
const headers = new Headers();
headers.set("x-api-key", apiKey);
headers.set("accept", request.headers.get("accept") ?? "application/json");
headers.set("content-type", request.headers.get("content-type") ?? "application/json");
```

Or minimally strip sensitive headers:

```ts
headers.delete("cookie");
headers.delete("authorization");
```

Reply with "@CharlieHelps yes please" if you'd like me to add a commit implementing a safe header allowlist for the proxy.
```diff
 const { thread } = useTambo();
-const {
-  suggestions: generatedSuggestions,
-  selectedSuggestionId,
-  accept,
-  generateResult: { isPending: isGenerating, error },
-} = useTamboSuggestions({ maxSuggestions });
-
-// Combine initial and generated suggestions, but only use initial ones when thread is empty
+const [selectedSuggestionId, setSelectedSuggestionId] =
+  React.useState<string | null>(null);
+const { setValue: setInputValue } = useTamboThreadInput();
+
+const accept = React.useCallback(
+  async ({ suggestion }: { suggestion: Suggestion }) => {
+    setInputValue(suggestion.detailedSuggestion);
+    setSelectedSuggestionId(suggestion.id);
+  },
+  [setInputValue],
+);
+
+const isGenerating = false;
+const error: Error | null = null;
+
+// Only use pre-seeded suggestions when thread is empty.
 const suggestions = React.useMemo(() => {
-  // Only use pre-seeded suggestions if thread is empty
   if (!thread?.messages?.length && initialSuggestions.length > 0) {
     return initialSuggestions.slice(0, maxSuggestions);
   }
-  // Otherwise use generated suggestions
-  return generatedSuggestions;
+  return [];
 }, [
   thread?.messages?.length,
-  generatedSuggestions,
   initialSuggestions,
   maxSuggestions,
 ]);
```
`message-suggestions.tsx` removes `useTamboSuggestions` and hard-codes `isGenerating = false`, `error = null`, and returns `[]` suggestions once the thread has messages.
That effectively disables generated suggestions entirely after the first message. If that’s intended as a temporary workaround, it should be explicit; otherwise it’s a functional regression (the UI will show nothing where it previously generated suggestions).
Suggestion
If you meant to keep suggestions generation, reintroduce `useTamboSuggestions` (or a replacement) and only fall back to `initialSuggestions` when the thread is empty, as sketched below.
If you meant to disable suggestions for now, gate the UI behind a prop/feature flag and remove the dead state (`isGenerating`, `error`) to reduce misleading code.
Reply with "@CharlieHelps yes please" if you'd like me to add a commit that restores generated suggestions while still supporting pre-seeded initialSuggestions.
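
A sketch of the first route, reusing the hook shape from the removed code (assumes `useTamboSuggestions` still returns the fields destructured before this change):

```tsx
const { thread } = useTambo();
const {
  suggestions: generatedSuggestions,
  selectedSuggestionId,
  accept,
  generateResult: { isPending: isGenerating, error },
} = useTamboSuggestions({ maxSuggestions });

// Pre-seeded suggestions only while the thread is empty; otherwise fall
// back to the generated ones instead of returning [].
const suggestions = React.useMemo(() => {
  if (!thread?.messages?.length && initialSuggestions.length > 0) {
    return initialSuggestions.slice(0, maxSuggestions);
  }
  return generatedSuggestions;
}, [
  thread?.messages?.length,
  generatedSuggestions,
  initialSuggestions,
  maxSuggestions,
]);
```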
```sql
-- One-time legacy cleanup
delete from public.threads where user_id is null;

delete from public.messages
where thread_id not in (select id from public.threads);
```
The migration includes destructive "one-time legacy cleanup":

```sql
delete from public.threads where user_id is null;
delete from public.messages where thread_id not in (select id from public.threads);
```

This is dangerous in real deployments:
- it irreversibly deletes data on migration apply
- it can delete data unexpectedly if the schema is applied to an environment with partially migrated/seeded data
- `threads.user_id` is `not null` in this same migration, so `user_id is null` should never be true for newly created rows; this cleanup is either obsolete or indicates a pre-existing inconsistent schema/data
Destructive data cleanup should be a separate, explicitly run script (or at least guarded behind checks).
Suggestion
Remove these deletes from the migration and handle legacy cleanup in a separate, manually-invoked script (or a guarded DO block that verifies you’re in a dev environment).
For example, create `supabase/scripts/legacy_cleanup.sql` and document it, or wrap in something like:

```sql
-- only in non-production, if you have a reliable marker
DO $$
BEGIN
  IF current_setting('app.env', true) = 'development' THEN
    DELETE ...;
  END IF;
END $$;
```

Reply with "@CharlieHelps yes please" if you'd like me to add a commit that removes the destructive cleanup from the migration and adds a separate cleanup script + README note.
Key security/correctness gaps remain: `proxyToTambo()` currently forwards sensitive headers (`cookie`/`authorization`) upstream, and the SSE response framing is not spec-compliant (missing `\n\n` event delimiters) with a brittle line parser (`trim()` + no multi-line support). `handleAdvanceStream()` persists the user message before the upstream call, which can leave dangling state on upstream failure. Finally, the migration contains destructive deletes that should not run as part of normal schema migration application.
Summary of changes
Proxying Tambo through authenticated server routes
- Updated `TamboProvider` usage to route requests via `tamboUrl="/api/tambo"` and set `apiKey="unused"` in:
  - `README.md`
  - `src/app/interactables/page.tsx`
  - new `src/app/chat/chat-client.tsx`
- Switched to a server-only secret by replacing `NEXT_PUBLIC_TAMBO_API_KEY` with `TAMBO_API_KEY` in `example.env.local`.

New authenticated API surface for threads/messages
- Added `src/app/api/tambo/[...path]/route.ts` implementing:
  - per-user thread operations (list/retrieve/update/generate-name/cancel/delete)
  - per-thread messages endpoints (`GET/POST /threads/:id/messages`, `PUT /threads/:id/messages/:messageId/component-state`)
  - SSE `/threads/advancestream` pass-through that persists streamed messages to Supabase
  - a generic proxy fallback to upstream Tambo for unhandled paths

UI/auth wiring changes
- Split chat page into a server component (`src/app/chat/page.tsx`) that enforces auth + `TAMBO_API_KEY` presence, rendering a new client component `src/app/chat/chat-client.tsx`.
- Expanded Supabase auth middleware matching to cover `/chat/:path*` and `/api/tambo/:path*`.

Supabase schema + RLS
- Added migrations:
  - `supabase/migrations/20260207_per_user_threads.sql` (tables, trigger, RLS select/insert/update policies + legacy cleanup deletes)
  - `supabase/migrations/20260207_per_user_threads_delete_policies.sql` (delete policies + repeats other policies)

Suggestions UX change
- `src/components/tambo/message-suggestions.tsx` removed generated suggestions (`useTamboSuggestions`) and now only shows pre-seeded suggestions when a thread is empty; after that it returns `[]` and uses `useTamboThreadInput` to set the input value on accept.
Implements strict per-user visibility for chat threads/messages by making Supabase the source of truth and blocking any thread/message access that isn’t owned by the authenticated user.
Resolves #11.
Changes
- `threads.user_id = auth user` is enforced on all `/api/tambo/threads/*` operations (list/retrieve/update/generate-name/advancestream/delete).
- `POST /threads/:id/messages` and `PUT /threads/:id/messages/:messageId/component-state` are handled against Supabase so the React SDK doesn’t fall back to proxying these to Tambo with the server API key.
- RLS policies on `threads` + `messages` (select/insert/update/delete) harden isolation at the database layer.

Verification
Changes skipped:
- `src/app/api/tambo/[...path]/route.ts`: suggestions about deeper routing refactors / richer error payloads / additional logging are out of scope for fixing cross-user thread access.
- `src/app/api/tambo/[...path]/route.ts`: the `cancel` endpoint remains a no-op (returns `true`) to match current client expectations; ownership is now enforced.
- `src/app/api/tambo/[...path]/route.ts`: `messages.create` returns `{ id }` (callers don’t use the response today).